381 research outputs found

    Dynamic Face Models: Construction and Applications

    A thesis submitted to the University of London for the degree of Doctor of Philosophy.

    Video Classification Using Spatial-Temporal Features and PCA

    We investigate the problem of automated video classification by analysing the low-level audio-visual signal patterns along the time course in a holistic manner. Five popular TV broadcast genres are studied: sports, cartoon, news, commercial and music. A novel statistically based approach is proposed, comprising two key ingredients designed for implicit semantic content characterisation and modelling of class identities. First, a spatial-temporal audio-visual "super" feature vector is computed, capturing crucial clip-level video structure information inherent in a video genre. Second, the feature vector is further processed using Principal Component Analysis to reduce the spatial-temporal redundancy while exploiting the correlations between feature elements, giving rise to a compact representation for effective probabilistic modelling of each video genre. Extensive experiments are conducted assessing various aspects of the approach and their influence on the overall system performance.
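    The dimensionality-reduction step the abstract describes can be sketched as follows — a minimal PCA-by-SVD projection of clip-level "super" feature vectors, assuming NumPy; the function name, feature dimensions, and component count are illustrative, not taken from the paper.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project clip-level feature vectors onto their top principal components.

    `features` is (n_clips, n_dims). Centering + SVD recovers the principal
    axes in the rows of Vt; projecting onto the leading axes yields the
    compact representation the abstract refers to.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix: rows of Vt are the principal axes,
    # ordered by decreasing explained variance.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T  # (n_clips, n_components) codes

# Illustrative data: 50 clips, 20-dimensional "super" feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
Z = pca_reduce(X, 5)
```

    The reduced codes `Z` keep the directions of largest variance, so the first component carries at least as much variance as the last — the property that makes the representation compact for per-genre probabilistic modelling.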

    Performance Analysis of UNet and Variants for Medical Image Segmentation

    Medical imaging plays a crucial role in modern healthcare by providing non-invasive visualisation of internal structures and abnormalities, enabling early disease detection, accurate diagnosis, and treatment planning. This study explores the application of deep learning models, focusing on the UNet architecture and its variants, to medical image segmentation. We evaluate the performance of these models across various challenging medical image segmentation tasks, addressing issues such as image normalisation, resizing, architecture choices, loss function design, and hyperparameter tuning. The findings reveal that the standard UNet, when extended with deeper network layers, is a proficient medical image segmentation model, while the Res-UNet and Attention Res-UNet architectures demonstrate smoother convergence and superior performance, particularly when handling fine image details. The study also addresses the challenge of high class imbalance through careful preprocessing and loss function definitions. We anticipate that the results of this study will provide useful insights for researchers seeking to apply these models to new medical imaging problems, and offer guidance and best practices for their implementation.
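    The loss-function handling of class imbalance mentioned above is commonly done with a soft Dice loss; a minimal NumPy sketch follows. This exact formulation is an assumption for illustration, not the loss definition used in the study.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    `pred` holds predicted probabilities in [0, 1]; `target` is the binary
    ground-truth mask. Because the loss is driven by the overlap ratio
    rather than per-pixel counts, it is far less sensitive to a large
    background class than plain cross-entropy.
    """
    intersection = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# A small foreground square in a mostly-background mask.
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0
print(soft_dice_loss(mask, mask))  # → 0.0 (perfect overlap)
```

    A prediction of all zeros scores a loss near 1.0 despite being ~86% pixel-accurate on this mask, which is exactly why overlap-based losses are preferred under heavy class imbalance.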

    Experience Goods and Consumer Search

    We introduce a search model where products differ in variety and unobserved quality (`experience goods'), and firms can establish a quality reputation. We show that the inability of consumers to observe quality before purchase significantly changes how search frictions affect market performance. In equilibrium, higher search costs hinder consumers' search for better-matched variety and increase prices, but can boost firms' investment in product quality. Under plausible conditions, both consumer and total welfare initially increase in search cost, whereas both would monotonically decrease if quality were observable. We apply the analysis to online markets, where low search costs coexist with low-quality products.

    Towards Enhancing In-Context Learning for Code Generation

    In-context learning (ICL) with pre-trained language models (PTLMs) has shown great success in code generation. ICL does not require training: PTLMs take as input a prompt consisting of a few requirement-code examples and a new requirement, and output a new program. However, existing studies simply reuse ICL techniques designed for natural language generation and ignore features unique to code generation. We refer to these studies as standard ICL. Inspired by observations of the human coding process, we propose a novel ICL approach for code generation named AceCoder. Compared to standard ICL, AceCoder has two novelties. (1) Example retrieval: it retrieves similar programs as examples and learns programming skills (e.g., algorithms, APIs) from them. (2) Guided code generation: it encourages PTLMs to output an intermediate preliminary (e.g., test cases, APIs) before generating programs. The preliminary helps PTLMs understand requirements and guides the subsequent code generation. We apply AceCoder to six PTLMs (e.g., Codex) and evaluate it on three public benchmarks using the Pass@k metric. Results show that AceCoder significantly improves the performance of PTLMs on code generation. (1) In terms of Pass@1, AceCoder outperforms standard ICL by up to 79.7% and fine-tuned models by up to 171%. (2) AceCoder is effective for PTLMs of different sizes (e.g., 1B to 175B) and different languages (e.g., Python, Java, and JavaScript). (3) We investigate multiple choices of the intermediate preliminary. (4) We manually evaluate generated programs in three aspects and demonstrate the superiority of AceCoder. (5) Finally, we discuss some insights about ICL for practitioners.
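    The prompt-assembly idea the abstract describes — retrieved requirement-code examples followed by the new requirement, with a slot for an intermediate preliminary before the code — can be sketched as below. The function name and prompt wording are assumptions for illustration; they are not AceCoder's actual template.

```python
def build_prompt(retrieved_examples, new_requirement):
    """Assemble an ICL prompt from retrieved requirement-code pairs.

    Each retrieved example shows the model a requirement and its solution;
    the final section states the new requirement and asks for a preliminary
    (here, test cases) before the program, guiding generation.
    """
    parts = []
    for req, code in retrieved_examples:
        parts.append(f"# Requirement: {req}\n{code}")
    parts.append(
        f"# Requirement: {new_requirement}\n"
        "# Write test cases first, then the solution:"
    )
    return "\n\n".join(parts)

# One hypothetical retrieved example, then a new requirement.
examples = [("return the square of x", "def square(x):\n    return x * x")]
prompt = build_prompt(examples, "return the absolute value of x")
```

    The resulting prompt would then be passed to a PTLM; the retrieved examples supply the "programming skills" signal, and the trailing instruction requests the intermediate preliminary before the final program.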